# Transfer Learning Optimization

- **PTBR GPT4 O NewsClassifier** (Apache-2.0) · gui8600k · 25 downloads · 1 like
  News classifier fine-tuned from `bert-base-multilingual-cased`; reports 99.93% accuracy on its evaluation set. Tags: Large Language Model, Transformers.
- **Microsoft Finetuned Personality** (MIT) · Nasserelsaman · 453 downloads · 10 likes
  Predicts a user's tendencies on the Big Five (OCEAN) personality traits from responses to 20 questions; reports 97% accuracy. Tags: Text Classification, Transformers, English.
- **Convnextv2 Large DogBreed** (Apache-2.0) · Pavarissy · 184 downloads · 6 likes
  Fine-tuned from `facebook/convnextv2-large-22k-224` on a dog-breed classification dataset; reports 91.39% accuracy on its evaluation set. Tags: Image Classification, Transformers.
- **Deepweeds** (Apache-2.0) · feisarx86 · 24 downloads · 1 like
  Image classifier fine-tuned from `google/vit-base-patch16-224-in21k`, suited to plant classification tasks. Tags: Image Classification, Transformers.
- **Plant Vit Model 1** (Apache-2.0) · Carina124 · 89 downloads · 1 like
  ViT-based plant image classifier; reports 99.95% validation accuracy after fine-tuning on an unspecified dataset. Tags: Image Classification, Transformers.
- **Car Brands Classification** (Apache-2.0) · lamnt2008 · 19 downloads · 3 likes
  BEiT-based image classifier with Vietnamese labels, suited to vision tasks. Tags: Image Classification, Transformers, Other.
- **Swin Tiny Patch4 Window7 224 Finetuned Birds** (Apache-2.0) · gjuggler · 17 downloads · 0 likes
  Swin Transformer bird classifier fine-tuned on bird datasets; reports 82.15% accuracy. Tags: Image Classification, Transformers.
- **Beit Base Patch16 224 Pt22k Ft22k Finetuned Eurosat** (Apache-2.0) · sabhashanki · 16 downloads · 0 likes
  Image classifier fine-tuned from `microsoft/beit-base-patch16-224-pt22k-ft22k` on an image-folder dataset. Tags: Image Classification, Transformers.
- **Vegetation Classification Model** (Apache-2.0) · iammartian0 · 21 downloads · 1 like
  ViT-based classifier for identifying vegetation in street-view images; reports 92.9% accuracy. Tags: Image Classification, Transformers, English.
- **Swin Tiny Patch4 Window7 224 Finetuned Og Dataset 10e** (Apache-2.0) · Gokulapriyan · 19 downloads · 0 likes
  Swin-Tiny vision model fine-tuned for 10 epochs on an image classification task. Tags: Image Classification, Transformers.
- **Swin Tiny Patch4 Window7 224 Finetuned Eurosat** (Apache-2.0) · Celal11 · 18 downloads · 0 likes
  Swin-Tiny image classifier fine-tuned on an image-folder dataset. Tags: Image Classification, Transformers.
- **Convnext Tiny 224 Finetuned Eurosat Albumentations** (Apache-2.0) · toshio19910306 · 18 downloads · 0 likes
  ConvNeXt-Tiny image classifier; reports 98.15% accuracy on the EuroSAT dataset. Tags: Image Classification, Transformers.
- **Vit Base Patch16 224 Finetuned** (Apache-2.0) · clp · 30 downloads · 0 likes
  Google Vision Transformer (ViT) fine-tuned on custom image datasets. Tags: Image Classification, Transformers.
- **Vit Large Patch32 384 Melanoma** (Apache-2.0) · UnipaPolitoUnimore · 100 downloads · 1 like
  Melanoma image classifier fine-tuned from Google's ViT-Large; reports 82.73% accuracy on its evaluation set. Tags: Image Classification, Transformers.
- **Vit Base Patch16 224 In21k Wwwwii** (Apache-2.0) · Zynovia · 22 downloads · 0 likes
  Google ViT fine-tuned on an unspecified dataset, intended for image classification. Tags: Image Classification, Transformers.
- **Exper Batch 16 E8** (Apache-2.0) · sudo-s · 30 downloads · 0 likes
  Fine-tuned from `google/vit-base-patch16-224-in21k` on the `sudo-s/herbier_mesuem1` dataset; reports 91.29% accuracy. Tags: Image Classification, Transformers.
- **Swin Tiny Patch4 Window7 224 Finetuned Mri** (Apache-2.0) · raedinkhaled · 14 downloads · 1 like
  Swin Transformer model fine-tuned for MRI image classification; reports 98.07% accuracy on its evaluation set. Tags: Image Classification, Transformers.
- **Indonesian Roberta Base Emotion Classifier** (MIT) · StevenLimcorn · 767 downloads · 14 likes
  Indonesian emotion classifier built on Indo-RoBERTa, fine-tuned on the IndoNLU EmoT dataset for sentiment analysis. Tags: Text Classification, Transformers, Other.
- **Minilm L3 H384 Uncased** (MIT) · nreimers · 324 downloads · 3 likes
  A slimmed 3-layer variant of Microsoft's MiniLM-L12-H384-uncased that keeps only layers [3, 7, 11] of the original model. Tags: Large Language Model, Transformers.
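Keeping a spaced subset of encoder layers, as this model does, is a common way to shrink a transformer while preserving representations from different depths. A toy sketch of the idea (the layer objects here are illustrative stand-ins, not the actual model code; with Hugging Face models one would reassign the encoder's layer list):

```python
# Toy "encoder": each layer is a function that records its index when run.
def make_layer(i):
    return lambda trace: trace + [i]

layers = [make_layer(i) for i in range(12)]  # 12-layer "teacher"
keep = [3, 7, 11]                            # indices retained in the small model
small = [layers[i] for i in keep]            # 3-layer "student"

trace = []
for layer in small:
    trace = layer(trace)
print(trace)  # [3, 7, 11]
```

The retained layers keep their original order, so the small model sees early, middle, and late representations of the full network.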
- **BERT Of Theseus MNLI** · canwenxu · 31 downloads · 0 likes
  A compressed BERT obtained through progressive module replacement, reducing model size while largely preserving performance. Tags: Large Language Model.
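Progressive module replacement, the compression scheme behind BERT-of-Theseus, trains a compact model by stochastically swapping each large "predecessor" block for its small "successor" during forward passes, raising the replacement probability over training until only successors remain. A minimal sketch under toy assumptions (the block functions and probability schedule here are illustrative, not the model's actual code):

```python
import random

def theseus_forward(x, predecessors, successors, p_replace, rng=random):
    """One forward pass with progressive module replacement:
    each predecessor block is swapped for its compact successor
    with probability p_replace."""
    for pred, succ in zip(predecessors, successors):
        block = succ if rng.random() < p_replace else pred
        x = block(x)
    return x

# Toy blocks: "large" blocks add 10, "compact" successors add 1.
preds = [lambda x: x + 10 for _ in range(3)]
succs = [lambda x: x + 1 for _ in range(3)]

print(theseus_forward(0, preds, succs, p_replace=0.0))  # 30: predecessors only
print(theseus_forward(0, preds, succs, p_replace=1.0))  # 3: successors only
```

At `p_replace=1.0` the network is exactly the compressed model, which is what ships after training.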
- **Bert Base Finnish Uncased V1** · TurkuNLP · 1,964 downloads · 0 likes
  FinBERT, a Finnish pre-trained language model based on Google's BERT architecture, trained on over 3 billion Finnish word tokens and suited to a range of Finnish NLP tasks. Tags: Large Language Model, Other.
- **Biobertpt All** · pucpr · 1,460 downloads · 23 likes
  BERT-based Portuguese model trained on clinical records and biomedical literature. Tags: Large Language Model, Other.
- **Minilmv2 L6 H384 Distilled From BERT Base** · nreimers · 179 downloads · 0 likes
  MiniLMv2, Microsoft's lightweight pre-trained language model, achieves efficient inference through knowledge distillation. Tags: Large Language Model, Transformers.
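As context for how distilled models like this are trained: in classic logit distillation the student is fit to the teacher's temperature-softened output distribution. MiniLMv2 itself distills self-attention relations rather than logits, so the following is a generic sketch of the distillation idea, with all names and values illustrative:

```python
import numpy as np

def softened(logits, T):
    """Softmax with temperature T (higher T gives a softer distribution)."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs,
    scaled by T^2 as in standard logit distillation."""
    p = softened(teacher_logits, T)
    q = softened(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

teacher = np.array([[4.0, 1.0, -2.0]])
student = np.array([[3.5, 1.5, -1.5]])
print(distill_loss(student, teacher))  # small positive value
print(distill_loss(teacher, teacher))  # 0.0: identical distributions
```

Minimizing this KL term pushes the student's soft predictions toward the teacher's, which is what transfers the "dark knowledge" beyond the hard labels.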
- **Gerpt2** (MIT) · benjamin · 48 downloads · 5 likes
  GerPT2, a large German language model based on the GPT-2 architecture, trained on CC-100 and German Wikipedia; reported to outperform comparable German GPT-2 models. Tags: Large Language Model, German.
- **Stsb Xlm R Greek Transfer** (Apache-2.0) · lighteternal · 1,219 downloads · 6 likes
  Sentence-transformer built on XLM-RoBERTa-base, designed for semantic textual similarity in Greek and English. Tags: Text Embedding, Transformers, Multilingual.
- **Bert Base Turkish Ner Cased** · savasy · 1,269 downloads · 18 likes
  BERT-based Turkish named-entity recognition model for entity tagging in Turkish text. Tags: Sequence Labeling, Other.
- **Bert Large Uncased Sparse 90 Unstructured Pruneofa** (Apache-2.0) · Intel · 13 downloads · 1 like
  Sparse pre-trained model reaching 90% weight sparsity via pruning, intended for fine-tuning on a range of language tasks. Tags: Large Language Model, Transformers, English.
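The "90% unstructured sparsity" these Prune-OFA models describe means nine in ten individual weights are zeroed, with no constraint on where the zeros fall; the usual criterion is weight magnitude. A minimal sketch of magnitude pruning (not Intel's actual Prune-OFA procedure, which prunes during pre-training so the sparse model can later be fine-tuned "once for all" tasks):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries so that at least
    `sparsity` fraction of the weights are exactly zero."""
    flat = np.abs(weights).ravel()
    k = int(np.ceil(sparsity * flat.size))  # number of entries to zero
    if k == 0:
        return weights.copy()
    threshold = np.sort(flat)[k - 1]
    mask = np.abs(weights) > threshold  # ties at the threshold are pruned too
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))          # stand-in for one weight matrix
pruned = magnitude_prune(w, sparsity=0.9)
print(f"sparsity: {(pruned == 0).mean():.3f}")
```

Surviving weights are unchanged; only the mask differs from the dense matrix, which is why such checkpoints can be loaded by ordinary dense code and still benefit from sparse kernels where available.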
- **Code Trans T5 Large Source Code Summarization Python Transfer Learning Finetune** · SEBIS · 29 downloads · 0 likes
  Pre-trained model based on the t5-large architecture, specializing in Python code summarization. Tags: Text Generation.
- **Bert Base Uncased Sparse 90 Unstructured Pruneofa** (Apache-2.0) · Intel · 178 downloads · 0 likes
  Sparsely pre-trained BERT-Base reaching 90% weight sparsity through one-shot pruning, intended for fine-tuning on a range of language tasks. Tags: Large Language Model, Transformers, English.
- **Roberta Tagalog Base** · GKLMIP · 23 downloads · 1 like
  Pre-trained Tagalog language model trained on multi-source data, aimed at improving Tagalog NLP performance. Tags: Large Language Model, Transformers.
© 2025 AIbase